
    Is a Large Intrinsic k_T Needed to Describe Photon + Jet Photoproduction at HERA?

    We study the photoproduction of an isolated photon and a jet using a code of partonic event generator type that includes the full set of next-to-leading order corrections. We compare our results to a recent ZEUS analysis in which an effective k_T of the incoming partons has been determined. We find that no additional intrinsic k_T is needed to describe the data. Comment: 23 pages LaTeX, 12 figures
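
    For orientation, intrinsic k_T is usually modelled as a Gaussian transverse-momentum smearing of the incoming partons. A common parametrisation is, schematically (this is illustrative only and not necessarily the exact form used in the ZEUS fit or in the code above),

        \[
            \frac{dN}{d^2\vec{k}_T}
            = \frac{1}{\pi\,\langle k_T^2\rangle}\,
              \exp\!\left(-\frac{k_T^2}{\langle k_T^2\rangle}\right),
            \qquad
            \langle k_T\rangle = \frac{\sqrt{\pi}}{2}\,\sqrt{\langle k_T^2\rangle},
        \]

    so that quoting an effective <k_T> amounts to fixing the width of this distribution.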

    Isolated prompt photon photoproduction at NLO

    We present a full next-to-leading order code to calculate the photoproduction of prompt photons. The code is a general-purpose program of partonic event generator type with large flexibility. We study the possibility of constraining the photon structure functions and comment on isolation issues. A comparison to ZEUS data is also shown. Comment: 22 pages LaTeX, 15 figures

    A NLO calculation of the hadron-jet cross section in photoproduction reactions

    We study the photoproduction of large-p_T charged hadrons in e p collisions, both for the inclusive case and for the case where a jet in the final state is also measured. Our results are obtained with an NLO generator of partonic events. We discuss the sensitivity of the cross section to the renormalisation and factorisation scales, and to various fragmentation function parametrisations. The possibility of constraining the parton densities in the proton and in the photon is assessed. Comparisons are made with H1 data for inclusive charged hadron production. Comment: 28 pages LaTeX, 14 figures

    Tools for NLO automation: extension of the golem95C integral library

    We present an extension of the program golem95C for the numerical evaluation of scalar integrals and tensor form factors entering the calculation of one-loop amplitudes, which supports tensor ranks exceeding the number of propagators. This extension allows various applications in Beyond the Standard Model physics and effective theories, for example higher ranks due to propagators of spin-two particles, or due to effective vertices. Complex masses are also supported. The program is not restricted to the Feynman diagrammatic approach, as it also contains routines to interface to unitarity-inspired numerical reconstruction of the integrand at the tensorial level. Therefore it can serve as a general integral library in automated programs to calculate one-loop amplitudes. Comment: 17 pages, 1 figure, the program can be downloaded from http://golem.hepforge.org/95/. arXiv admin note: substantial text overlap with arXiv:1101.559
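
    To make "tensor ranks exceeding the number of propagators" concrete: the form factors evaluated by such a library decompose one-loop tensor integrals of the generic form (the conventions below are schematic and not necessarily those of golem95C)

        \[
            I_N^{\mu_1\cdots\mu_r}
            = \int \frac{d^n k}{i\pi^{n/2}}\;
              \frac{k^{\mu_1}\cdots k^{\mu_r}}
                   {\prod_{j=1}^{N}\big[(k+r_j)^2 - m_j^2 + i\delta\big]}.
        \]

    In renormalisable gauge theories the rank r never exceeds the number of propagators N, whereas spin-two propagators or effective vertices can push r above N; that higher-rank case is what the extension covers.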

    Shocks in dense clouds. IV. Effects of grain-grain processing on molecular line emission

    Grain-grain processing has been shown to be an indispensable ingredient of shock modelling in high-density environments. For densities higher than ~10^5 cm^-3, shattering becomes a self-enhanced process that has severe chemical and dynamical consequences for the shock characteristics. Shattering is accompanied by the vaporization of grains, which can directly release SiO into the gas phase. Given that SiO rotational line radiation is used as a major tracer of shocks in dense clouds, it is crucial to understand the influence of vaporization on SiO line emission. We have developed a recipe for implementing the effects of shattering and vaporization into a 2-fluid shock model, resulting in a reduction of computation time by a factor of ~100 compared to a multi-fluid modelling approach. This implementation was combined with an LVG-based modelling of molecular line radiation transport. Using this model we calculated grids of shock models to explore the consequences of different dust-processing scenarios. Grain-grain processing is shown to have a strong influence on C-type shocks for a broad range of magnetic fields: they become hotter and thinner. The reduction in column density of shocked gas lowers the intensity of molecular lines, while the higher peak temperatures increase the intensity of highly excited transitions compared to shocks without grain-grain processing. For OH the net effect is an increase in line intensities, while for CO and H2O it is the opposite. The intensity of H2 emission is decreased in low transitions and increased for highly excited lines. For all molecules, the highly excited lines become sensitive to the value of the magnetic field. Although vaporization increases the intensity of SiO rotational lines, this effect is weakened by the reduced shock width. The release of SiO early in the hot shock changes the excitation characteristics of SiO radiation. Comment: Published in Astronomy and Astrophysics (2013). 26 pages, 16 figures, 14 tables
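
    For reference, an LVG treatment replaces the full line radiative transfer by a local (Sobolev) escape probability. In its simplest form, shown schematically below (geometry factors and the exact implementation in the shock code may differ), the net radiative de-excitation rate of a transition u -> l is scaled as

        \[
            A_{ul} \;\rightarrow\; A_{ul}\,\beta(\tau_{ul}),
            \qquad
            \beta(\tau) = \frac{1 - e^{-\tau}}{\tau},
            \qquad
            \tau_{ul} \propto \frac{n_{\rm mol}}{|dv/dz|},
        \]

    so optically thick lines are strongly trapped while optically thin ones escape freely; this is what makes the line intensities sensitive to the shock's density, width, and velocity gradient.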

    Penetration and cratering experiments of graphite by 0.5-mm diameter steel spheres at various impact velocities

    Cratering experiments have been conducted with 0.5-mm diameter AISI 52100 steel spherical projectiles and 30-mm diameter, 15-mm long graphite targets. The latter were made of a commercial grade of polycrystalline and porous graphite named EDM3, whose behavior is known to be macroscopically isotropic. A two-stage light-gas gun launched the steel projectiles at velocities between 1.1 and 4.5 km s^-1. In most cases, post-mortem tomographies revealed that the projectile was trapped, fragmented or not, inside the target. They also showed that the apparent crater size and depth increase with the impact velocity. This is also the case for the crater volume, which appears to follow a power law significantly different from those constructed in previous works for similar impact conditions and materials. Meanwhile, the projectile depth of penetration starts to decrease at velocities beyond 2.2 km s^-1, firstly because of the projectile's plastic deformation and then, beyond 3.2 km s^-1, because of its fragmentation. In addition to these three regimes of penetration behavior already described by a few authors, we suggest a fourth regime in which projectile melting plays a significant role at velocities above 4.1 km s^-1. A discussion of these four regimes is provided and indicates that each phenomenon may account for the local evolution of the depth of penetration.
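
    The power-law behaviour of the crater volume mentioned above can be quantified with a simple least-squares fit of V = K v^alpha to the (velocity, volume) measurements. The sketch below only illustrates the procedure; the data points are synthetic placeholders drawn from an assumed power law, not the measured values of the paper.

        # Illustrative fit of a crater-volume power law V = K * v**alpha.
        # The "data" are synthetic placeholders, not the paper's measurements.
        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(v, K, alpha):
            # Crater volume model; v is the impact velocity in km/s.
            return K * v**alpha

        rng = np.random.default_rng(1)
        velocities = np.linspace(1.1, 4.5, 12)            # km/s, range of the experiments
        volumes = power_law(velocities, 0.8, 2.0)         # assumed K and alpha for the demo
        volumes *= rng.normal(1.0, 0.1, velocities.size)  # 10% synthetic scatter

        (K_fit, a_fit), cov = curve_fit(power_law, velocities, volumes, p0=(1.0, 2.0))
        K_err, a_err = np.sqrt(np.diag(cov))
        print(f"fitted exponent alpha = {a_fit:.2f} +/- {a_err:.2f}")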

    A compact representation of the 2 photon 3 gluon amplitude

    A compact representation of the loop amplitude gamma gamma ggg -> 0 is presented. The result has been obtained by using helicity methods and sorting with respect to an irreducible function basis. We show how to convert spinor representations into a field strength representation of the amplitude. The amplitude defines a background contribution for Higgs boson searches at the LHC in the channel H -> gamma gamma + jet, which was earlier extracted indirectly from the one-loop representation of the 5-gluon amplitude. Comment: 15 pages LaTeX, 6 eps files included, revised version

    Next-to-leading order multi-leg processes for the Large Hadron Collider

    In this talk we discuss recent progress concerning precise predictions for the LHC. We give a status report on three applications of our method for dealing with multi-leg one-loop amplitudes: the interference term of Higgs production by gluon- and weak-boson fusion at order O(alpha^2 alpha_s^3), and the next-to-leading order corrections to the two processes pp -> ZZ jet and u ubar -> d dbar s sbar. The latter is a subprocess of the four-jet cross section at the LHC. Comment: 6 pages, 5 figures. Talk given at the 8th International Symposium on Radiative Corrections (RADCOR), October 1-5 2007, Florence, Italy

    Camera orientation, calibration and inverse perspective with uncertainties: A Bayesian method applied to area estimation from diverse photographs

    Large collections of images have become readily available through modern digital catalogs, from sources as diverse as historical photographs, aerial surveys, or user-contributed pictures. Exploiting the quantitative information present in such wide-ranging collections can greatly benefit studies that follow the evolution of landscape features over decades, such as measuring areas of glaciers to study their shrinking under climate change. However, many available images were taken with low-quality lenses and unknown camera parameters. Useful quantitative data may still be extracted, but it becomes important both to account for imperfect optics and to estimate the uncertainty of the derived quantities. In this paper, we present a method to address both goals, and apply it to the estimation of the area of a landscape feature traced as a polygon on the image of interest. The technique is based on a Bayesian formulation of the camera calibration problem. First, the probability density function (PDF) of the unknown camera parameters is determined for the image, based on matches between 2D (image) and 3D (world) points together with any available prior information. In a second step, the posterior distribution of the feature area of interest is derived from the PDF of camera parameters. In this step, we also model systematic errors arising in the polygon tracing process, as well as uncertainties in the digital elevation model. The resulting area PDF therefore accounts for most sources of uncertainty. We present validation experiments and show that the model produces accurate and consistent results. We also demonstrate that in some cases, accounting for optical lens distortions is crucial for accurate area determination with consumer-grade lenses. The technique can be applied to many other types of quantitative features to be extracted from photographs when careful error estimation is important. Funded by the Agence Nationale de la Recherche (ANR).
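
    As a rough illustration of the second (uncertainty propagation) step described above, the sketch below draws camera parameters from an assumed Gaussian posterior, back-projects the traced polygon onto a flat ground plane with an ideal pinhole model, and summarises the resulting area distribution. All names and numbers are hypothetical, and the simplifications (no lens distortion, no DEM, Gaussian posterior) are exactly what the full Bayesian method goes beyond.

        # Monte Carlo propagation of camera-parameter uncertainty to a polygon area.
        # Simplified, hypothetical stand-in for the full method: flat ground plane,
        # ideal pinhole camera, Gaussian approximation of the parameter posterior.
        import numpy as np

        def pixel_to_ground(px, py, f, cam_h, tilt):
            # Back-project a pixel (coordinates relative to the principal point,
            # y pointing down) onto a flat ground plane lying cam_h metres below
            # a camera tilted down by `tilt` radians from the horizontal.
            c, s = np.cos(tilt), np.sin(tilt)
            down = c * py + s * f        # downward component of the viewing ray
            forward = -s * py + c * f    # horizontal (forward) component
            t = cam_h / down             # scale factor to reach the ground plane
            return np.array([px * t, forward * t])   # (east, north) in metres

        def polygon_area(pts):
            # Shoelace formula for the area of a planar polygon.
            x, y = pts[:, 0], pts[:, 1]
            return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

        rng = np.random.default_rng(0)
        # Hypothetical Gaussian posterior of (focal length [px], height [m], tilt [rad]).
        mean = np.array([1500.0, 30.0, np.deg2rad(60.0)])
        cov = np.diag([50.0, 1.0, np.deg2rad(1.0)]) ** 2
        # Polygon traced on the image, in pixels relative to the principal point.
        polygon_px = np.array([[-200, 150], [200, 160], [180, 350], [-220, 340]])

        areas = []
        for f, cam_h, tilt in rng.multivariate_normal(mean, cov, size=5000):
            ground = np.array([pixel_to_ground(px, py, f, cam_h, tilt)
                               for px, py in polygon_px])
            areas.append(polygon_area(ground))
        areas = np.array(areas)
        print(f"area = {np.mean(areas):.1f} m^2, 68% interval "
              f"[{np.percentile(areas, 16):.1f}, {np.percentile(areas, 84):.1f}] m^2")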